psycho[w83,jmc] Notes on article for Psychology Today
Illustrations:
0. Do machines think? Yes, but not much at present.
1. Rube Goldberg thermostat
1'. I'm just a computer, but ...
2. jmc in Tom Wolfe costume in front of ordinary thermostat.
2.5 Meet my friend the thermostat.
3. I think you'll be amused by its presumption.
It's just a naive little financial adviser program, but I think you'll
be amused by its presumption.
4. It's a good program, but when I use the airline reservation
program it gets jealous.
5. There is only one God and he is John McCarthy and the subroutine
starting at 1573426 is his prophet.
The main idea of the article is to elaborate the notion of
thermostat to a level where it is hard to describe what is known
about it without using mental qualities.
Anthropomorphism is somewhat ok.
cory.1[let,jmc]
keep reprint rights
Chris Cory 212 725-7535
illustrations
Fragmentary mental qualities
Dog looks where it usually is rather than where it left it.
Reference to "Can computers feel pain?"
Searle?
Apes and mirrors
Natural kinds
"Place the control near the bed in a place that is neither hotter nor
colder than the room itself. If the control is placed on a radiator or
radiant heated floors, it will "think" the entire room is hot and will
lower your blanket temperature, making your bed too cold. If the control
is placed on the window sill in a cold draft, it will "think" the entire
room is cold and will heat up your bed so it will be too hot." - from the
instructions to an electric blanket.
Note that it doesn't require a numerical concept of temperature.
Jumping to conclusions
It is better to oversimplify than to give up.
Sarah McCarthy - "I can, but I won't".
How much of a mind do present machines and programs have?
What do they know?
We have some complicated inductions to do, such as concluding that objects are
permanent and that other people are like ourselves. Perhaps we do them,
but perhaps the results are pre-programmed and have only to mature.
Asimov
The key to the paper is the set of examples.
1. thermostat
2. fancy thermostat
3. operating system
It won't run my job because (it thinks) I don't want it to run,
my core requirement is too large, I have recently run, or my account
is out of money.
We could make them self-conscious, but we haven't.
What kind of computer program should consider itself
on a par with humans in some respect? If it could observe humans
sufficiently, it could learn that certain information could be
obtained in certain places. Perhaps I want my advice giving
program to observe everything I type and learn what it can do
for itself.
We are at the beginning of this, so the next steps are speculative -
even more speculative than the ultimate result.
However, we say the airline schedule is in the guide rather than
saying that the guide knows the schedule.
The business is in the phone book even if the book is in Chinese.
Reply to remarks about certain truth being socially determined.
As Laplace said to Napoleon in another connection, "I had no
need of that hypothesis".
What would be required for a computer program to have rights? To
have obligations?
We don't want Asimov's third law.
what makes us human
semi-anthropomorphic
Sherry Turkle paper in Society "blaming computers"
Chris Cory 212 725-7535
4000 words
send ascribing
$1000
relate to current issues
Relate to the design stance, the intentional stance, and Brainstorms and
Sloman.
Since everyone gets to put in plugs, I'll put in one for defense.
1. criteria for welfare or guilt of the computer.
2. A computer isn't hungry for electricity.
A person might or might not be hungry for some essential food.
The annoyance of the philosophers is like that
of a carpenter when he sees a laborer putting
up forms for concrete.
These ideas come from AI.
One answer about how to talk to the servants is to program them,
but that is inadequate, because we don't understand their programs.
1. It expects a line number.
2. It forgot me.
3. It doesn't know me.
4. Aunt Hillary and the Chinese room.
5. There are large differences between the minds of humans and those
of the most intelligent programs we can make today.
No program can recognize that it is using an ambiguous concept and
then recover.
Example:
A program running an automatic travel agency.
1. It thinks I want the lowest possible fare.
2. It thinks I'm female, because my name is Evelyn.
Susie's ideas
I never thought he'd ask.
I thought he'd never ask.
All I wanted was an X, and it thinks I'm crazy.
We ascribe mental qualities in the course of designing a program.
The object to which we ascribe mental qualities may not
be as obvious as in the case of humans. Except for the reputed
multiple personalities, about which I am dubious, there is only
one mind associated with a human body. However, one computer
may be running many programs in parallel or in time-shared mode.
Moreover, one program may be interpreting another. An interesting
hypothetical example of a human running a program is John Searle's
Chinese room. Searle's conclusions were different from mine.
We aren't talking about full AI.
Q. Can we make it introspect - to examine its own thoughts?
A. Yes, we can. In one sense introspection is easier for computers
than humans.
An ape can recognize that the image in the mirror is itself while
a monkey cannot.
They think a little as well as having little thoughts, i.e.
they carry out some of the mental processes AI research has identified.
It looked for it, but it didn't find it.
Just because it knows one thing doesn't mean it knows other things.
E.g. Mycin doesn't know about life or death or doctors or hospitals.
It intends to do it, but something more important to its goals
may come up.
It has the goal of obeying, but survival comes first if survival
is threatened.
It thinks this action will achieve the goal without harmful side
effects, but this may be wrong, and it may discover that it is
wrong.
Colby? It is easier to simulate the suspicion and hostility
than to simulate the intelligence of the paranoid.
For almost a century we have used machines in our
daily lives whose detailed functioning most of us don't understand.
Few know much about how the electric light system or the telephone system
works internally. We do know their external behavior; we know that
lights are turned on and off by switches and how to dial telephone
numbers. We may not know much about internal combustion engines, but
we know that an automobile must have more gasoline put in its tank
when the gauge reads near EMPTY.
In the next century we will increasingly be faced with much
more complex computer-based systems. It won't be necessary for
many people to know very much about how they work internally, but
what we will have to know about them in order to use them is more
complex than what we need to know about electric lights and
telephones.
Much that we will have to know concerns the information
contained in them. Many people already use psychological
words such as "knows", "believes", "thinks", "wants" and "likes"
in referring to computer based machines, even though these machines
are quite different from humans, and these words arose from
the human need to talk about other humans.
According to many authorities, to use the language of mind to
talk about machines is to commit the intellectual sin of anthropomorphism.
Anthropomorphism is a sin all right, but it is going to be increasingly
difficult to understand machines without using mental terms.
Researchers in artificial intelligence are interested
in the use of mental terms to describe machines for two reasons.
First, we have to provide the machines with theories of knowledge
and belief so they can reason about what their users know, don't know,
and want. Second, what a user knows about a machine can often
best be expressed using mental terms.
Suppose someone says, "The dog wants to go out".
Because the dog doesn't always want to go out, the statement is informative.
Moreover, the statement is non-committal about what the dog is doing
or will do to further this desire. While most dogs aren't very subtle,
in principle the clues that cause us to believe it wants to go out
could be picked up quite intuitively, and a person might not be able to say how
he knows.
So when is it useful to say that a computer program wants something?
First, it must give information. It must tell us something about
how this particular occasion is different from other occasions or
how this program is different from other programs. Second, it is
most useful when we can't say what the program will do to realize
its want - when it has several alternative actions and perhaps
others we don't know about. Third, we might not know
why we believe it wants it. Finally, instead of talking about
a particular situation, we may be making a general remark. "When
the program wants more paper in the printer, it beeps and displays
a message".
Someone may complain that saying the computer wants more
paper is simply a longwinded way of saying that the printer is
out of paper. However, this may not accurately describe the condition
under which the beep occurs. First, the computer may ask for more
paper under more complicated conditions that we don't precisely
know. For example, when it expects a long print job or expects the
competent operator to go off duty, it may ask for more paper earlier than usual.
Second, it may mistakenly believe it is almost out of paper.
For these reasons, it is often more informative, and there is less
risk of error, to use the phrase "wants more paper".
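To make the difference concrete, here is a small sketch in a modern
programming notation; the condition names and thresholds are invented for
illustration and don't come from any actual printer. The point is only that
the condition under which the controller asks for paper is better summarized
by a want than by an empty tray.

def wants_more_paper(sheets_believed_left, queued_pages, operator_on_duty):
    # sheets_believed_left is the controller's estimate; if its sensor is
    # wrong, it may mistakenly "believe" it is almost out of paper.
    if sheets_believed_left == 0:
        return True            # out of paper in the plain sense
    if queued_pages > sheets_believed_left:
        return True            # expects a long print job
    if not operator_on_duty and sheets_believed_left < 100:
        return True            # asks early before the operator goes off duty
    return False

Saying "the printer is out of paper" describes only the first condition;
saying "it wants more paper" covers all of them and remains informative even
when the controller's belief about its supply is mistaken.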
We are interested in studying the relations between the
states of certain machines and certain English sentences.
The simplest example is the relation between the state of a
thermostat and the English sentence "The room is too cold".
We don't need the word "thinks" in order to understand
how a thermostat works.
"Place the control near the bed in a place that is neither hotter nor
colder than the room itself. If the control is placed on a radiator or
radiant heated floors, it will "think" the entire room is hot and will
lower your blanket temperature, making your bed too cold. If the control
is placed on the window sill in a cold draft, it will "think" the entire
room is cold and will heat up your bed so it will be too hot." - from the
instructions to an electric blanket.
I suppose most philosophers, psychologists and English teachers
would maintain that the electric blanket manufacturer is guilty of
anthropomorphism in the above instructions, and some will claim that great
harm will come from thus ascribing to machines properties which only
humans can have. I shall argue that saying that the blanket control will
"think" is ok; they could even have left off the quotes. Moreover, our
daily lives will more and more involve interacting with much more
sophisticated computer controlled machines. Understanding and explaining
their behavior well enough to make good use of them will more and more
require ascribing mental qualities to them.
Don't get me wrong. The ordinary anthropomorphism, in which
a person says, "This terminal hates me" and bashes it, is just
as silly as ever. I will argue that there can be machines that
might "hate" you, but they would have to be a lot more complex than
an ordinary computer terminal. The question isn't whether machines have mental
qualities. The real question is this:
under what conditions is it useful to ascribe which mental qualities to
which machines?
Answer: When it says something informative that cannot as conveniently
be said some other way.
The thermostat is a nice example, because we can describe
its behavior both ways - using mentalistic words (as above) or purely
in physical terms.
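As a minimal sketch (in a modern programming notation, with an invented set
point; not any particular thermostat's mechanism), the physical description
can be the code itself and the mentalistic description can appear alongside it:

SET_POINT = 20.0   # degrees Celsius

class Heater:
    def __init__(self):
        self.on = False
    def turn_on(self):
        self.on = True
    def turn_off(self):
        self.on = False

def thermostat_step(sensed_temperature, heater):
    # Physical description: if the sensed temperature is below the set point,
    # close the heater circuit; otherwise open it.
    if sensed_temperature < SET_POINT:
        heater.turn_on()     # mentalistic description: it thinks the room is too cold
    else:
        heater.turn_off()    # it thinks the room is warm enough

If the sensor sits on a radiator, sensed_temperature misrepresents the room,
and the mentalistic description - it thinks the whole room is hot - says
exactly what goes wrong, just as in the electric blanket instructions.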
The machine promises to do something. It owes me $5.00 and an
explanation.
Why computers have a bad name. Lack of on-lineness. They're unconscious
most of the time.
Identifying the computer with the program is often a mistake. Searliness.
How to understand the servants
This article is about how to understand your servants and
how to speak to them.
In the next twenty years people will get more and more services
through direct interaction with computers.
We can regard computers as mechanical servants just as we regard
electric lights. The power system that gets us electric
light is very complicated with its nuclear power stations,
its transmission system with power sharing among utilities,
and its complicated financial and legal institutions. However,
to use it, we need only know how to work a switch and change
a light bulb. Some uses of a computer are equally simple, but
to get the most use out of a computer, we have to understand
a lot more about it than how to work an on-off switch.
Fortunately, we will be able to use many intuitive psychological
concepts we have developed for understanding our fellow humans, provided we
are careful to use the concepts that are appropriate and avoid those that may
be tempting but misleading. We can often best express what
we know about a machine, especially a computer controlled machine,
by saying that it ⊗believes something, that it ⊗wants something,
that it has ⊗promised something or that it ⊗intends to do something.
A machine may sometimes appropriately tell us, %2"I can, but I won't"%1.
The ideas we will use come from artificial intelligence research.
Many people will object to thus ascribing mental qualities to
machines, and their objections have a substantial basis in past
experience. Firstly, it is a common joke to ascribe emotions
to machines. Murphy's laws are examples. Also %2"My car
hates me"%1.
Outline
good and bad anthropomorphism
how do we come to ascribe mental qualities to other people and
to ourselves?
examples of mental qualities and the criteria for their ascription
examples of systems that presently exhibit mental qualities
example of a future system with more mental qualities
conclusion
Life will be more interesting with computers to psychologize.
They will have the psychology that is convenient to their designers.
They'll be fascist bastards if the programmers don't think twice.
References:
My article
Brainstorms
Margaret Boden
Nilsson's book and Nilsson's readings
People will say a computer promised without first reading Searle on
speech acts.
The little thoughts of thinking machines
Does my computer love me?
No, it doesn't.
Can you make one that will love me?
Well, maybe. It might be difficult, though.
Self-conscious.
Should we use words like ⊗believes, ⊗wants, ⊗intends, ⊗promises,
and ⊗owes in explaining computer programs?
Computer programs are already very complicated and they will get
more so.
Who can find me a really complicated temperature control mechanism?
Well, I can make it love you, but then it will be jealous when
you use other computers.
JOE'S PROGRAMMING SHOP
ACME PROGRAMMING
Certainly I can make the program love you.
Do you want it to be jealous when you use other programs
in the same computer?
There is only one God and he is John McCarthy and
his prophet begins at location 37254.
Does it intend to pay me?
It promised to pay me.
It owes me the money.
It threatened not to pay me (or to give me a ticket).
It confused me with the other John McCarthy.
It hopes ...
It is waiting for me to ...
It doesn't expect me to ...
It will charge what it thinks you can afford.
Praise Dennett.
The philosopher Daniel Dennett has recently clarified these issues
in an elegant way. He distinguishes four ways of predicting the behavior
and future
of a system - the ⊗physical, ⊗design, ⊗intentional and ⊗astrological
stances.
The physical stance considers the structure of the system in
terms of atoms, etc. and applies the laws of physics to determine
what will happen. Actually we may take the physical structure
at various levels of detail according to the information available
and our ability to compute.
The design stance considers the parts of the system as having
been designed for a purpose and analyzes it in these terms. For example,
we can determine when an alarm clock is likely to go off by looking
at its exterior and using our belief that it was designed to function
as an alarm clock. It doesn't much matter whether the alarm clock is
built of springs and gears or has an electric motor or is built from
an integrated circuit. We don't need these physical facts and often
they aren't available or the user wouldn't understand them anyway.
The intentional stance analyzes the system in terms of beliefs
and purposes. We suppose it will act in ways that it believes will
fulfill its purposes.
Modern science is committed to the principle that the physical
stance will always work in principle, i.e. that all phenomena have
a physical basis. None of the others are so universal. It doesn't
help to analyze a mountain as having been built for a purpose
nor does it help to regard a mountain as having purposes of its own.
Once it was customary to regard the existence of the ant as having
the purpose of teaching us not to be lazy and the rainbow as having
the purpose of warning us that the next time God destroyed the world
it would be by fire and not by water. These attributions of purpose
turned out to be unhelpful in further understanding the phenomena in question.
However, it is often useful to ask what purpose some part of an ant
serves in keeping the ant functioning. This is because natural selection
resulted in the ant having parts that serve such purposes.
For variety Dennett mentions the astrological stance. It says the
way to think about the future of a human is to pay attention to
the configuration of the stars when he was born. To determine whether
an enterprise will succeed, we determine whether the signs are
favorable. The astrological stance is clearly distinct from the
others, but it is worthless.
We understand human mental structure only
somewhat better than fish understand swimming.
The intentional stance is most useful when it is the only
way of expressing what we know about a system. However,
if we make that part of the definition, the intentional stance
will always be mysterious. Therefore, we study the intentional stance
by applying it to systems that can also be understood physically.
******
Self consciousness
The core of the human feeling of difference from machines and animals
is the feeling of self consciousness. When shall we say a machine
is self conscious? As usual we will distinguish different kinds
and levels.
1. Introspection is the ability to examine one's own thoughts.
At one level, this is very easy for a computer to do. The program
has the same form as data, and a computer can examine it. A program
can also emulate itself with given input. In other words it is easy
to make computers answer questions like, "What would I do in a certain
situation, where the situation is defined by the data that would
be available?" There is one important limitation. The time taken
to answer a question about a computation
must be longer than the time it would take to do the computation.
Otherwise we could set up a paradox by making a program decide what
it would do in the present situation and, having decided, do
something different.
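Here is a toy sketch of the easy kind of introspection, in a modern
programming notation with an invented decision procedure: the program's text
is data the machine can examine, and the program can answer "what would I do?"
by running itself on data describing the hypothetical situation.

import inspect

def choose_action(situation):
    # A trivial decision procedure standing in for a larger program.
    return "go out" if situation == "door is open" else "wait"

# Examining its own program text: the program has the same form as data.
# (This works when the program is run from a source file.)
own_text = inspect.getsource(choose_action)

# Emulating itself: what would I do if the door were open?
hypothetical_answer = choose_action("door is open")

Answering the hypothetical question requires doing the very computation it
asks about, which is the limitation noted above.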
The most interesting mental quality is self-consciousness.
We will discuss some of its easier aspects and leave the rest as
an exercise for the reader.
introspection, self as a material object like other objects,
self as a person compared to other persons,